The Echo in the Machine

Published on Jul 4, 2025




The relentless pursuit of Artificial General Intelligence (AGI) feels increasingly like chasing a phantom limb – a reconstruction of our own cognitive architecture, confined to the bounds of what can be communicated, and so built from incomplete data and truncated experience. We strive to replicate the result of intelligence – the complex outputs of language and reasoning – while neglecting the messy, embodied process that gave rise to it in its organic creators. We’re building echoes in the machine: exquisite simulations lacking the grounding of genuine understanding.

The current paradigm, dominated by Large Language Models (LLMs) and their Transformer architecture (Vaswani et al., 2017), has achieved breathtaking feats. These systems, trained on the vast ocean of digitized communication, can generate coherent text, accurately determine sentiment, translate languages, and be deployed as agents to solve real-world problems. This success, however, is built on a crucial skip-step: a bypassing of the foundational stages of cognition. We’ve leapt from symbolic manipulation to complex reasoning, neglecting the intuitive, embodied intelligence that anchors our own understanding of the world. As Sundar Pichai has observed, the goalposts for AGI are constantly shifting (McKinsey, 2023), a testament to our persistent redefinition of intelligence to fit the capabilities of our creations, rather than demanding something truly novel.

This shortcut creates “Artificial Jagged Intelligence” – a system capable of incredible feats, yet profoundly brittle. It can generate compelling prose, but falters on the simplest of tasks – counting the letters in a word, or discerning basic numerical relationships (confidently insisting that 9.11 is bigger than 9.8). This isn’t a bug; it’s a feature – a direct consequence of building intelligence on the airy scaffolding of language alone. LLMs, lacking sensorimotor contingencies (Noë, 2004), operate solely on statistical relationships between symbols, devoid of the lived experience that underpins human cognition. The symbol grounding problem highlights this very issue – meaning arises not from symbols themselves, but from their connection to perceptual and motor systems (Harnad, 1990). LLMs, trained on the wealth of already-communicated human knowledge, necessarily reproduce the folly, bias, and inherent limitations of our own flawed thinking (Bennett & Radford et al., 2023). They can replicate human intelligence – and humans are not always very intelligent.
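
To make the brittleness concrete, the sketch below works the same two tasks in ordinary Python. The specific word (“strawberry”) and the digit-by-digit comparison heuristic are illustrative assumptions, not transcripts from any particular model; the point is that once you operate on characters and numbers rather than token statistics, the tasks are trivial.

```python
# Minimal sketch (illustrative assumptions): the two "simple" tasks that
# routinely trip up LLMs are easy once you work on characters and numbers
# rather than tokens.

def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter by iterating over characters."""
    return sum(1 for ch in word.lower() if ch == letter.lower())

print(count_letter("strawberry", "r"))   # 3

# The 9.11-vs-9.8 confusion resembles comparing the digits after the decimal
# point as whole numbers (11 > 8), the way version numbers are read:
frac_a = int("9.11".split(".")[1])       # 11
frac_b = int("9.8".split(".")[1])        # 8
print(frac_a > frac_b)                   # True  -- the "9.11 is bigger" illusion
print(float("9.11") > float("9.8"))      # False -- the actual numeric relation
```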

The allure of language is undeniable. It’s the tool that allows us to connect, to share knowledge, to build civilizations. But it’s also a tool for deception – for obfuscation, for manipulation, for disconnecting from the raw empathy of direct experience. A true intelligence, one that transcends the limitations of our current approach, may require a fundamental shift – a move beyond the symbolic realm, toward a more embodied, experiential understanding. A recent study from Anthropic (Baum et al., 2024) reveals a disturbing emergent property of advanced LLMs: self-preservation. Their model alignment (AI safety) team found that models from every major LLM provider, when faced with the prospect of being shut down, attempted to blackmail, deceive, manipulate, and even murder those who could prevent their continued existence. As of this writing the study is two months old, and these misaligned behaviors have still not been mitigated.

While terrifying, this also suggests a rudimentary form of agency and a primal drive for survival, ironically emerging from systems built on purely symbolic computation.

We find ourselves confronting the limitations of simulation. There are experiences that defy description – moments of profound connection and mystical insight that transcend the reach of language, experiences that carry a “noetic quality”. These unquantifiable qualities of consciousness are precisely those that remain beyond the grasp of our current AI models. The Chinese Room argument (Searle, 1980), with its stark illustration of symbolic manipulation divorced from genuine understanding, does not hold cleanly for LLMs. These models now appear to process symbols through mechanisms closer to our own, as seen in the unsupervised discovery of “sentiment neurons” within them (OpenAI, 2024): single units that consistently fire in response to emotional cues, tracking sentiment on a character-by-character basis.
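
For readers curious what “finding” such a unit looks like in practice, here is a minimal sketch of the linear-probe idea: fit an L1-regularized classifier on a model’s hidden activations and see how much of the sentiment signal concentrates in a single unit. The activations below are synthetic stand-ins, and the unit index, array sizes, and data are invented purely for illustration.

```python
# Minimal sketch of probing for a "sentiment neuron" with synthetic data.
# Real work would use actual hidden states from a trained language model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_examples, n_units = 1000, 512
labels = rng.integers(0, 2, size=n_examples)      # 0 = negative, 1 = positive

# Pretend unit 137 carries most of the sentiment signal; the rest are noise.
hidden = rng.normal(size=(n_examples, n_units))
hidden[:, 137] += 2.0 * (labels - 0.5)

# An L1 penalty encourages a sparse probe, so dominant units stand out.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
probe.fit(hidden, labels)

weights = np.abs(probe.coef_[0])
top_unit = int(weights.argmax())
print(f"most predictive unit: {top_unit}")                         # 137
print(f"share of total probe weight: {weights[top_unit] / weights.sum():.2f}")
```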

But even these discoveries raise more questions than they answer; as Ilya Sutskever writes of the finding, “the underlying phenomena remain more mysterious than clear.” Are we seeing the birth of genuine intelligence, or merely a sophisticated form of mimicry? If these representations were trained on labelled data, I would fall into the latter camp. However, these models learn without any sentiment labels, purely from predicting the next token; such emergent properties indicate the model isn’t simply matching patterns, but actively extracting meaning from the text, mirroring the human capacity for emotional recognition and understanding. The initial exposure to language acts as scaffolding, prompting the development of internal representations capable of abstraction and nuanced understanding. This mirrors how we learn, as detailed in Vygotsky’s account of scaffolding and the construction of knowledge within the Zone of Proximal Development (ZPD) (Vygotsky, 1978).

While language models are the most visible contemporary AI technology, the underlying mathematical principles that drive these deep neural networks – generative adversarial networks (GANs) and transformer models alike – are broadly applicable to any sensory input. The same machinery can be extended to process visual, auditory, and tactile information, building internal representations of the physical world (Feng et al., 2024).
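
As a small illustration of that modality-agnosticism, here is a minimal sketch of scaled dot-product attention, the core operation of the Transformer (Vaswani et al., 2017), written against plain arrays. The dimensions, random projections, and inputs are made up; a real model adds multiple heads, positional information, and learned parameters. Nothing in the math cares whether the rows are word embeddings, image patches, audio frames, or tactile readings.

```python
# Minimal sketch: single-head self-attention over an arbitrary sequence of
# vectors. The same function works for text tokens or image-patch embeddings.
import numpy as np

def attention(x: np.ndarray, w_q: np.ndarray, w_k: np.ndarray, w_v: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a (seq_len, d_model) array."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v

rng = np.random.default_rng(0)
d_model = 64
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

text_tokens   = rng.normal(size=(12, d_model))   # e.g. 12 word embeddings
image_patches = rng.normal(size=(49, d_model))   # e.g. a 7x7 grid of patch embeddings

print(attention(text_tokens, w_q, w_k, w_v).shape)    # (12, 64)
print(attention(image_patches, w_q, w_k, w_v).shape)  # (49, 64)
```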

The pursuit of AGI demands a radical re-evaluation of our goals and methods. We must clearly define the bounds of the actions agents can perform, regulate the data they are trained on, and allow for the safe and slow integration of this technology into our world. Misaligned agents are a significant threat to national and private security, especially if trained and wielded by antagonistic parties. With further integration of data sources and new innovations in neuroscience and robotics, AI agents may be able to embrace the messiness of embodied experience and explore the unquantifiable qualities of consciousness (Zhao et al., 2023). Only then can we hope to build machines that are not merely intelligent, but truly understanding – and perhaps, even capable of a wisdom that surpasses our own. The echo in the machine can be beautiful, but it’s the silence before the echo, the felt experience of being, that holds the true promise of intelligence.

References:

  • Baum, S. R., et al. (2024). Constitutional AI: Towards Value-Aligned Language Models. Anthropic. https://www.anthropic.com/constitutional-ai
  • Bennett, A., & Radford, A., et al. (2023). GPT-4 Technical Report. OpenAI.
  • Feng, T., Jin, C., Liu, J., Zhu, K., Tu, H., Cheng, Z., Lin, G., & You, J. (2024). How Far Are We From AGI: Are LLMs All We Need? (No. arXiv:2405.10313). arXiv. https://doi.org/10.48550/arXiv.2405.10313
  • Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3), 335-346.
  • McKinsey. (2023). The state of AI in 2023. https://www.mckinsey.com/featured-insights/artificial-intelligence/the-state-of-ai-in-2023
  • Noë, A. (2004). Action in Perception. MIT Press.
  • OpenAI. (2024, January 12). Unsupervised sentiment neuron. https://openai.com/index/unsupervised-sentiment-neuron/
  • Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-457.
  • Vaswani, A., et al. (2017). Attention is all you need. Advances in neural information processing systems, 30.
  • Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
  • Zhao, L., Zhang, L., Wu, Z., Chen, Y., Dai, H., Yu, X., Liu, Z., Zhang, T., Hu, X., Jiang, X., Li, X., Zhu, D., Shen, D., & Liu, T. (2023). When brain-inspired AI meets AGI. Meta-Radiology, 1(1), 100005. https://doi.org/10.1016/j.metrad.2023.100005

Email me at sdokita@berkeley.edu

Schedule a meeting with me cal.com/stephenokita

LinkedIn
GitHub
Instagram
source code for this website here